
    Assessing the Incremental Value of Option Pricing Theory Relative to an "Informationally Passive" Benchmark

    In modern finance, the value of an active investment strategy is measured by comparing its performance against the benchmark of passively holding the market portfolio and the riskless asset. We wish to evaluate the marginal contribution of a theoretical derivatives pricing model in the same way, by comparing its performance against an "informationally passive" alternative model. All rationally priced options must satisfy a number of conditions to rule out profitable static arbitrage. The Black-Scholes model, and others like it, are obtained by assuming an equilibrium in which there are no profitable dynamic arbitrage opportunities either. The passive model we consider incorporates only the fundamental properties of option prices that must hold to avoid static arbitrage, but has no theoretical content beyond that. We review different measures of model performance and apply them to several versions of the Black-Scholes model and our passive model. As with active portfolio management, it turns out not to be easy for an "active" model to do much better than a well-designed passive alternative. For example, the "classical" Black-Scholes model turns out to be less accurate than the passive benchmark.
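
The static no-arbitrage conditions that such a passive benchmark rests on can be sketched as simple checks on a strip of call quotes. The function below is a minimal illustration of those conditions, not the paper's implementation; the names and the tolerance are mine.

```python
import math

def static_arbitrage_violations(spot, rate, maturity, strikes, calls):
    """Flag violations of the basic static no-arbitrage conditions for call
    prices quoted at increasing strikes: price bounds, monotonicity in
    strike, and convexity in strike (non-negative butterfly cost)."""
    violations = []
    disc = math.exp(-rate * maturity)
    for k, c in zip(strikes, calls):
        # A call is worth at least its discounted intrinsic value, at most the spot.
        if not max(spot - k * disc, 0.0) <= c <= spot:
            violations.append(("bounds", k))
    for i in range(1, len(strikes)):
        if calls[i] > calls[i - 1]:          # calls must be non-increasing in strike
            violations.append(("monotonicity", strikes[i]))
    for i in range(1, len(strikes) - 1):     # convexity: each price under the chord
        w = (strikes[i + 1] - strikes[i]) / (strikes[i + 1] - strikes[i - 1])
        if calls[i] > w * calls[i - 1] + (1 - w) * calls[i + 1] + 1e-12:
            violations.append(("convexity", strikes[i]))
    return violations
```

A quote strip that passes all three checks admits no profitable static arbitrage of these elementary kinds, which is all the "informationally passive" model assumes.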

    Option Investor Rationality Revisited

    Do option investors rationally exercise their options? Numerous studies report evidence of irrational behavior. In this paper, we pay careful attention to intraday option quotes and reach the opposite conclusion. An exercise boundary violation (EBV) occurs when the best bid price for an American option is below the option’s intrinsic value. Far from being unusual, we show that EBVs occur very frequently. Under these conditions, the rational response of an investor liquidating an option is to exercise the option rather than sell it. Empirically, we find that the likelihood of early exercise is strongly influenced by the existence and duration of EBVs. Not only do these results reverse standard theory on American option valuation and optimal exercise strategy, but they also suggest that the ability to avoid selling at an EBV price creates an additional source of value for American options that is unrelated to, and in addition to, dividend payments. This additional value may help explain why American options appear overpriced relative to European options.
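
The paper's central object, an exercise boundary violation, is easy to state in code. The sketch below is illustrative (the function names are mine): it compares the best bid against intrinsic value and picks the rational liquidation route.

```python
def is_ebv(option_type, best_bid, spot, strike):
    """True if the best bid sits below the American option's intrinsic
    (immediate exercise) value -- an exercise boundary violation."""
    if option_type == "call":
        intrinsic = max(spot - strike, 0.0)
    else:
        intrinsic = max(strike - spot, 0.0)
    return best_bid < intrinsic

def liquidation_choice(option_type, best_bid, spot, strike):
    """Under an EBV, a holder who wants out does better exercising the
    option than selling it at the (too low) bid."""
    return "exercise" if is_ebv(option_type, best_bid, spot, strike) else "sell"
```

For example, a call on a 105 stock with strike 100 has intrinsic value 5; a best bid of 4.50 is an EBV, and a liquidating holder should exercise rather than sell.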

    Is the "Leverage Effect" a Leverage Effect?

    The "leverage effect" refers to the well-established relationship between stock returns and both implied and realized volatility: volatility increases when the stock price falls. A standard explanation ties the phenomenon to the effect a change in market valuation of a firm's equity has on the degree of leverage in its capital structure, with an increase in leverage producing an increase in stock volatility. We use both returns and directly measured leverage to examine this hypothesized explanation for the "leverage effect" as it applies to the individual stocks in the S&P100 (OEX) index, and to the index itself. We find a strong "leverage effect" associated with falling stock prices, but also numerous anomalies that call into question leverage changes as the explanation. These include the facts that the effect is much weaker or nonexistent when positive stock returns reduce leverage; it is too small with measured leverage for individual firms, but much too large for OEX implied volatilities; the volatility change associated with a given change in leverage seems to die out over a few months; and there is no apparent effect on volatility when leverage changes because of a change in outstanding debt or shares, only when stock prices change. In short, our evidence suggests that the "leverage effect" is really a "down market effect" that may have little direct connection to firm leverage.
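
A minimal way to see the asymmetry the abstract describes is to compare volatility changes after up periods and after down periods. The sketch below (the block length and names are illustrative, not the paper's method) does this on non-overlapping blocks of returns: a "down market effect" shows up as volatility rising after negative-return blocks more than it changes after positive ones.

```python
import numpy as np

def down_market_asymmetry(returns, window=21):
    """Average change in realized volatility conditional on the sign of the
    preceding block's return, using non-overlapping blocks of `window` days.
    Returns (mean vol change after down blocks, mean vol change after up blocks)."""
    r = np.asarray(returns, dtype=float)
    n = len(r) // window
    blocks = r[: n * window].reshape(n, window)
    vols = blocks.std(axis=1)            # realized vol within each block
    block_rets = blocks.sum(axis=1)      # total return over each block
    dvol = np.diff(vols)                 # block-to-block vol change
    prior = block_rets[:-1]              # sign of the preceding block's return
    up = dvol[prior > 0].mean() if (prior > 0).any() else np.nan
    down = dvol[prior < 0].mean() if (prior < 0).any() else np.nan
    return down, up
```

A "leverage effect" in the conventional sense would make `down` clearly positive and larger in magnitude than `up`; the paper's anomalies concern how this asymmetry (mis)matches directly measured leverage.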

    Estimating the Implied Risk Neutral Density

    The market's risk neutral probability distribution for the value of an asset on a future date can be extracted from the prices of a set of options that mature on that date, but two key technical problems arise. In order to obtain a full well-behaved density, the option market prices must be smoothed and interpolated, and some way must be found to complete the tails beyond the range spanned by the available options. This paper develops an approach that solves both problems, with a combination of smoothing techniques from the literature modified to take account of the market's bid-ask spread, and a new method of completing the density with tails drawn from a Generalized Extreme Value distribution. We extract twelve years of daily risk neutral densities from S&P 500 index options and find that they are quite different from the lognormal densities assumed in the Black-Scholes framework, and that their shapes change in a regular way as the underlying index moves. Our approach is quite general and has the potential to reveal valuable insights about how information and risk preferences are incorporated into prices in many financial markets.
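
The extraction step rests on the standard Breeden-Litzenberger result: the risk neutral density is the discounted second derivative of the call price with respect to strike, f(K) = e^{rT} ∂²C/∂K². A bare-bones sketch, assuming an evenly spaced and already smoothed strike grid (the paper's bid-ask-aware smoothing and GEV tail completion are not reproduced here):

```python
import math

def risk_neutral_density(strikes, calls, rate, maturity):
    """Breeden-Litzenberger density f(K) = exp(r*T) * d2C/dK2, approximated
    with central finite differences on an evenly spaced strike grid.
    Returns density values at the interior strikes."""
    h = strikes[1] - strikes[0]
    growth = math.exp(rate * maturity)
    return [
        growth * (calls[i - 1] - 2 * calls[i] + calls[i + 1]) / (h * h)
        for i in range(1, len(strikes) - 1)
    ]
```

Because the finite difference is a discrete butterfly spread, negative density values signal exactly the static-arbitrage violations that the smoothing step must remove before the tails are grafted on.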

    Estimation Error in the Assessment of Financial Risk Exposure

    Value at Risk and similar measures of financial risk exposure require predicting the tail of an asset returns distribution. Assuming a specific form, such as the normal, for the distribution, the standard deviation (and possibly other parameters) are estimated from recent historical data and the tail cutoff value is computed. But this standard procedure ignores estimation error, which we find to be substantial even under the best of conditions. In practice, a "tail event" may represent a truly rare occurrence, or it may simply be a not-so-rare occurrence at a time when the predicted volatility underestimates the true volatility, due to sampling error. This problem gets worse the further in the tail one is trying to predict. Using a simulation of 10,000 years of daily returns, we first examine estimation risk when volatility is an unknown constant parameter. We then consider the more realistic, but more problematical, case of volatility that drifts stochastically over time. This substantially increases estimation error, although strong mean reversion in the variance tends to dampen the effect. Non-normal fat-tailed return shocks make overall risk assessment much worse, especially in the extreme tails, but estimation error per se does not add much beyond the effect of tail fatness. Using an exponentially weighted moving average to downweight older data hurts accuracy if volatility is constant or only slowly changing. But with more volatile variance, an optimal decay rate emerges, with better performance for the most extreme tails being achieved using a relatively greater rate of downweighting. We first simulate non-overlapping independent samples, but in practical risk management, risk exposure is estimated day by day on a rolling basis. This produces strong autocorrelation in the estimation errors, and bunching of apparently extreme events. We find that with stochastic volatility, estimation error can increase the probabilities of multi-day events, like three 1% tail events in a row, by several orders of magnitude. Finally, we report empirical results using 40 years of daily S&P 500 returns which confirm that the issues we have examined in simulations are also present in the real world.
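
The constant-volatility case of the simulation is easy to reproduce in miniature. The sketch below (parameter values and names are illustrative) draws i.i.d. normal returns, estimates sigma each day from the trailing window, and counts how often the next return breaches the predicted 1% cutoff: estimation error alone pushes the realized breach rate above the nominal 1%.

```python
import numpy as np

def tail_exceedance_rate(true_vol=0.01, window=250, n_days=250_000,
                         seed=0):
    """Simulate i.i.d. normal returns with constant (but unknown) volatility,
    forecast the 1% tail cutoff each day from the trailing window's
    zero-mean standard deviation, and return the realized breach rate."""
    rng = np.random.default_rng(seed)
    r = rng.normal(0.0, true_vol, n_days)
    z = -2.3263478740408408                       # standard normal 1% quantile
    # Rolling std around an assumed zero mean, via a cumulative sum of squares.
    css = np.concatenate(([0.0], np.cumsum(r ** 2)))
    sigma_hat = np.sqrt((css[window:] - css[:-window]) / window)
    cutoff = z * sigma_hat[:-1]                   # forecast for the following day
    breaches = r[window:] < cutoff
    return breaches.mean()
```

Because tomorrow's return divided by the windowed estimate is approximately t-distributed rather than normal, the breach rate lands a little above 1% even in this best case; the stochastic-volatility and fat-tailed cases in the paper make it worse.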

    Forecasting Volatility Using Historical Data

    Applying modern option valuation theory requires the user to forecast the volatility of the underlying asset over the remaining life of the option, a formidable estimation problem for long maturity instruments. The standard statistical procedures using historical data are based on assumptions of stability, either constant variance, or constant parameters of the variance process, that are unlikely to hold over long periods. This paper examines the empirical performance of different historical variance estimators and of the GARCH(1,1) model for forecasting volatility in important financial markets over horizons up to five years. We find several surprising results: in general, historical volatility computed over many past periods provides the most accurate forecasts for both long and short horizons; root mean squared forecast errors are substantially lower for long term than for short term volatility forecasts; it is typically better to compute volatility around an assumed mean of zero than around the realized mean in the data sample; and the GARCH model tends to be less accurate and much harder to use than the simple historical volatility estimator for this application.
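
The zero-mean-versus-sample-mean distinction the abstract highlights amounts to one line in the estimator. A minimal sketch (annualization factor and names are mine) showing both variants:

```python
import math

def historical_vol(returns, demean=False, periods_per_year=252):
    """Annualized historical volatility from a sample of periodic returns.
    With demean=False the variance is computed around an assumed mean of
    zero, the variant the paper finds typically forecasts better; with
    demean=True it is computed around the realized sample mean."""
    n = len(returns)
    mean = sum(returns) / n if demean else 0.0
    var = sum((r - mean) ** 2 for r in returns) / n
    return math.sqrt(var * periods_per_year)
```

Subtracting a noisily estimated sample mean discards part of each squared return, so the demeaned estimate is never larger than the zero-mean one on the same sample; over long horizons that lost drift term is mostly noise, which is why the zero-mean variant tends to forecast better.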

    Remembering Fischer Black
